
    Air Pollutants around an Animal Feed Processing Facility in Nacogdoches, TX: A Study on Their Effect on Local Outdoor Air Quality

    In this exploratory study, different odorous compounds were measured near TFP Nutrition to better understand the impact of odor on local outdoor air quality. TFP Nutrition produces pet feed, livestock feed, and agricultural fertilizer for the local brand Lone Star Feeds, and is known in the Nacogdoches area for producing powerful odors near its facilities. The plant sits in the downtown area of the city, in proximity to an elementary school, recreational softball fields, and residential homes. Because odors can be connected to the presence of air pollutants, this air-quality study was performed to quantify odor data for public education, health purposes, and further research if necessary. Two Nasal Ranger® Field Olfactometers (St. Croix Sensory) were used simultaneously: one evaluated general odor while the other evaluated ammonia (NH₃) odors. In addition, weather conditions, including temperature, wind direction, and wind speed, were collected using two pocket weather trackers (Model 4500, Kestrel). Sampling occurred twice per week at five different locations near the plant. Notable findings included dilution-to-threshold (D/T) ratios at the highest possible value of 60 at certain locations on different days. Overall, the highest D/T ratios for both categories of odors (general and ammonia) were found at location 4, southwest of the facility. Wind direction appeared to have a large impact, as the highest D/T ratios were detected when the wind traveled from the facility toward the sample locations. An important finding was that each time a D/T ratio greater than 2 was detected on the Nasal Ranger evaluating general odor, a D/T ratio of equal or lesser value was also detected on the Nasal Ranger evaluating ammonia odor, suggesting that much of the odor from this facility may be related to ammonia.
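    The abstract's key pattern (whenever general odor exceeded a D/T of 2, the ammonia D/T was equal or lower) can be expressed as a simple consistency check. The readings below are illustrative placeholders, not the study's actual data:

```python
# Hypothetical D/T (dilution-to-threshold) readings at five sample
# locations; values are illustrative, not the study's actual data.
readings = [
    {"location": 1, "general_dt": 2, "ammonia_dt": 0},
    {"location": 2, "general_dt": 7, "ammonia_dt": 4},
    {"location": 3, "general_dt": 15, "ammonia_dt": 15},
    {"location": 4, "general_dt": 60, "ammonia_dt": 30},
    {"location": 5, "general_dt": 4, "ammonia_dt": 2},
]

def ammonia_linked(rows):
    """True if every reading with general D/T > 2 has an ammonia D/T
    of equal or lesser value, mirroring the study's reported pattern."""
    return all(r["ammonia_dt"] <= r["general_dt"]
               for r in rows if r["general_dt"] > 2)

print(ammonia_linked(readings))  # → True
```

    A dataset where an ammonia reading exceeded the general reading would make the function return False, flagging a departure from the reported pattern.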

    Applying and Interpreting Mixture Distribution Latent State-Trait Models

    Latent state-trait (LST) models are commonly applied to determine the extent to which observed variables reflect trait-like versus state-like constructs. Mixture distribution LST (M-LST) models relax the assumption of population homogeneity made in traditional LST models, allowing researchers to identify subpopulations (latent classes) with differing trait- and state-like attributes. Applications of M-LST models are scarce, presumably because of the analysis complexity. We present a step-by-step tutorial for evaluating M-LST models based on an application to mother, father, and teacher reports of children’s inattention (n = 811). In the application, we found three latent classes for mother and father reports and four classes for teacher reports. All reporter solutions contained classes with very low, low, and moderate levels of inattention. The teacher solution also contained a class with high inattention. Comparable mother and father (but not teacher) classes exhibited similar levels of trait and state variance.
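    The trait- versus state-like distinction the abstract refers to is usually summarized with variance-component coefficients. The sketch below uses the standard LST definitions (consistency, occasion specificity, reliability); it is a generic illustration, not code from the tutorial, and the input variances are made up:

```python
def lst_coefficients(var_trait, var_state, var_error):
    """Standard latent state-trait variance decomposition:
    consistency  = trait variance / total variance
    specificity  = state (occasion) variance / total variance
    reliability  = (trait + state variance) / total variance
    """
    total = var_trait + var_state + var_error
    return {
        "consistency": var_trait / total,
        "occasion_specificity": var_state / total,
        "reliability": (var_trait + var_state) / total,
    }

# Hypothetical variance estimates for one observed indicator.
coefs = lst_coefficients(var_trait=3.0, var_state=1.0, var_error=1.0)
print(coefs["consistency"])  # → 0.6
```

    In an M-LST model these coefficients are computed per latent class, which is how classes with "similar levels of trait and state variance" can be compared across reporters.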

    Permutation Equivariant Neural Functionals

    This work studies the design of neural networks that can process the weights or gradients of other neural networks, which we refer to as neural functional networks (NFNs). Despite a wide range of potential applications, including learned optimization, processing implicit neural representations, network editing, and policy evaluation, there are few unifying principles for designing effective architectures that process the weights of other networks. We approach the design of neural functionals through the lens of symmetry, in particular by focusing on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order. We introduce a framework for building permutation equivariant neural functionals, whose architectures encode these symmetries as an inductive bias. The key building blocks of this framework are NF-Layers (neural functional layers) that we constrain to be permutation equivariant through an appropriate parameter sharing scheme. In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks that require processing the weights of MLPs and CNNs, such as predicting classifier generalization, producing "winning ticket" sparsity masks for initializations, and classifying or editing implicit neural representations (INRs). In addition, we provide code for our models and experiments at https://github.com/AllanYangZhou/nfn.

    Comment: To appear in Neural Information Processing Systems (NeurIPS), 2023.
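    The permutation symmetry the abstract describes is easy to verify directly: reordering the hidden neurons of an MLP (permuting the rows of the first layer's weights and bias, and the matching columns of the second layer's weights) leaves the network function unchanged. The NumPy sketch below demonstrates that symmetry; it is not an NF-Layer implementation, just the invariance NF-Layers are built to respect:

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-layer MLP: x -> W2 @ relu(W1 @ x + b1) + b2, with 8 hidden neurons.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permute the hidden neurons: rows of W1/b1 and columns of W2 move together.
perm = rng.permutation(8)
x = rng.normal(size=4)
y_original = mlp(x, W1, b1, W2, b2)
y_permuted = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)

# The two weight settings define the same function.
assert np.allclose(y_original, y_permuted)
```

    Because every such permutation yields functionally identical weights, an architecture that processes weights should produce correspondingly permuted (equivariant) or identical (invariant) outputs, which is what the NF-Layer parameter-sharing scheme enforces.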

    Neural Functional Transformers

    The recent success of neural networks as implicit representations of data has driven growing interest in neural functionals: models that can process other neural networks as input by operating directly over their weight spaces. Nevertheless, constructing expressive and efficient neural functional architectures that can handle high-dimensional weight-space objects remains challenging. This paper uses the attention mechanism to define a novel set of permutation equivariant weight-space layers and composes them into deep equivariant models called neural functional Transformers (NFTs). NFTs respect weight-space permutation symmetries while incorporating the advantages of attention, which has exhibited remarkable success across multiple domains. In experiments processing the weights of feedforward MLPs and CNNs, we find that NFTs match or exceed the performance of prior weight-space methods. We also leverage NFTs to develop Inr2Array, a novel method for computing permutation invariant latent representations from the weights of implicit neural representations (INRs). Our proposed method improves INR classification accuracy by up to +17% over existing methods. We provide an implementation of our layers at https://github.com/AllanYangZhou/nfn.
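    A permutation invariant latent, as in Inr2Array, can be obtained by pooling per-neuron features over the neuron axis: equivariant layers (attention-based in NFTs) refine the features, and the pooling step removes the ordering. The sketch below shows only the pooling idea with mean-pooling; it is a simplified stand-in, not the paper's layer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "weight-space feature" tensor: one 16-dimensional feature vector
# per hidden neuron of some network (8 neurons). Illustrative data only.
features = rng.normal(size=(8, 16))

def invariant_latent(feats):
    # Mean-pool over the neuron axis: any reordering of the rows
    # produces the same latent vector.
    return feats.mean(axis=0)

perm = rng.permutation(8)
assert np.allclose(invariant_latent(features),
                   invariant_latent(features[perm]))
```

    Any symmetric pooling (sum, max, or attention with a learned query) works in place of the mean; the choice trades off how much per-neuron information survives the reduction.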

    Towards Fairer Datasets: Filtering and Balancing the Distribution of the People Subtree in the ImageNet Hierarchy

    Computer vision technology is being used by many but remains representative of only a few. People have reported misbehavior of computer vision models, including offensive prediction results and lower performance for underrepresented groups. Current computer vision models are typically developed using datasets consisting of manually annotated images or videos; the data and label distributions in these datasets are critical to the models' behavior. In this paper, we examine ImageNet, a large-scale ontology of images that has spurred the development of many modern computer vision methods. We consider three key factors within the "person" subtree of ImageNet that may lead to problematic behavior in downstream computer vision technology: (1) the stagnant concept vocabulary of WordNet, (2) the attempt at exhaustive illustration of all categories with images, and (3) the inequality of representation in the images within concepts. We seek to illuminate the root causes of these concerns and take the first steps to mitigate them constructively.

    Comment: Accepted to FAT* 2020.
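    One simple way to address the third factor (unequal representation within a concept) is to downsample each demographic group to a common size. The sketch below is a generic balancing routine in the spirit of that intervention, not the paper's actual pipeline; the group labels and counts are hypothetical:

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical annotations for one concept: (image_id, group) pairs.
images = ([("img%d" % i, "group_a") for i in range(90)] +
          [("img%d" % i, "group_b") for i in range(90, 120)] +
          [("img%d" % i, "group_c") for i in range(120, 130)])

def balance(annotated, per_group=None):
    """Downsample each group to the size of the smallest one (or to
    `per_group`), equalizing representation within the concept."""
    by_group = defaultdict(list)
    for img, grp in annotated:
        by_group[grp].append(img)
    k = per_group or min(len(v) for v in by_group.values())
    return {g: random.sample(v, k) for g, v in by_group.items()}

balanced = balance(images)
print({g: len(v) for g, v in balanced.items()})  # every group reduced to 10
```

    Downsampling discards data, so in practice it is often paired with targeted collection of new images for the underrepresented groups rather than used alone.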